641 research outputs found
Spectral analysis of Markov kernels and application to the convergence rate of discrete random walks
Let $(X_n)_{n\in\mathbb{N}}$ be a Markov chain on a measurable space $\X$ with transition kernel $P$ and let $V:\X\rightarrow[1,+\infty)$. The Markov kernel $P$ is here considered as a bounded linear operator on the weighted-supremum space $\cB_V$ associated with $V$. Then the combination of quasi-compactness arguments with a precise analysis of the eigen-elements of $P$ allows us to estimate the geometric rate of convergence $\rho_V(P)$ of the iterates $P^n$ to the invariant probability measure $\pi$ in operator norm on $\cB_V$. A general procedure to compute $\rho_V(P)$ for discrete Markov random walks with identically distributed bounded increments is specified.
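As a toy numerical illustration (not the paper's operator-theoretic procedure), for a finite-state chain the geometric convergence rate toward the invariant measure is governed by the second-largest eigenvalue modulus of the transition matrix. The sketch below uses an assumed reflected random walk with bounded increments on $\{0,\dots,K\}$ and assumed parameters $K=20$, $p=0.3$:

```python
import numpy as np

# Geometric convergence rate of a finite chain: for an irreducible aperiodic
# stochastic matrix, ||P^n - 1*pi|| decays like rho^n, where rho is the
# second-largest eigenvalue modulus (the largest modulus is the Perron value 1).
def convergence_rate(P):
    mods = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return mods[1]

# Reflected random walk on {0,...,K} with increments +-1 (assumed example).
K, p = 20, 0.3
P = np.zeros((K + 1, K + 1))
for i in range(K + 1):
    P[i, min(i + 1, K)] += p       # step up (absorbed into K at the boundary)
    P[i, max(i - 1, 0)] += 1 - p   # step down (reflected at 0)

rho = convergence_rate(P)
assert 0 < rho < 1  # aperiodic and irreducible, so the rate is strictly < 1
```

The boundary self-loops make the chain aperiodic, which is what guarantees the strict inequality above.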
Additional material on bounds of $\ell^2$-spectral gap for discrete Markov chains with band transition matrices
We analyse the $\ell^2(\pi)$-convergence rate of irreducible and aperiodic Markov chains with an $N$-band transition probability matrix $P$ and with invariant distribution $\pi$. This analysis is heavily based on: first, the study of the essential spectral radius $r_{ess}(P_{|\ell^2(\pi)})$ of $P_{|\ell^2(\pi)}$ derived from Hennion's quasi-compactness criteria; second, the connection between the spectral gap property (SG) of $P$ on $\ell^2(\pi)$ and the $V$-geometric ergodicity of $P$. Specifically, (SG) is shown to hold under the condition $\alpha_0 := \sum_{m=-N}^{N} \limsup_{i\rightarrow+\infty} \sqrt{P(i,i+m)\, P^*(i+m,i)} < 1$. Moreover $r_{ess}(P_{|\ell^2(\pi)}) \leq \alpha_0$. Simple conditions on asymptotic properties of $P$ and of its invariant probability distribution $\pi$ ensuring that $\alpha_0 < 1$ are given. In particular this allows us to obtain estimates of the $\ell^2(\pi)$-geometric convergence rate of random walks with bounded increments. The specific case of reversible $P$ is also addressed. Numerical bounds on the convergence rate can be provided via a truncation procedure. This is illustrated on the Metropolis-Hastings algorithm.
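For a concrete feel of the condition $\alpha_0 < 1$, here is a minimal sketch with an assumed upward probability $p = 0.3$ for the reflected random walk on the non-negative integers, a reversible 1-band chain for which $P^* = P$ and the limsup terms reduce to the constant off-boundary band entries:

```python
import numpy as np

# Minimal sketch (assumed parameters): reversible reflected random walk with
# upward probability p.  Away from the boundary, P(i,i+1) = p, P(i,i-1) = 1-p,
# P(i,i) = 0, and reversibility gives P* = P, so the limsup terms are constants.
p = 0.3
band = {1: p, -1: 1 - p, 0: 0.0}  # off-boundary band entries P(i, i+m)

# alpha_0 = sum_m sqrt( P(i,i+m) * P*(i+m,i) ), evaluated off the boundary
alpha0 = sum(np.sqrt(band[m] * band[-m]) for m in (-1, 0, 1))

# Here alpha_0 = 2*sqrt(p*(1-p)), which is < 1 whenever p != 1/2.
assert alpha0 < 1
```

For $p = 0.3$ this gives $\alpha_0 = 2\sqrt{0.21} \approx 0.917$, so the spectral gap condition holds.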
Computable bounds of $\ell^2$-spectral gap for discrete Markov chains with band transition matrices
We analyse the $\ell^2(\pi)$-convergence rate of irreducible and aperiodic Markov chains with an $N$-band transition probability matrix $P$ and with invariant distribution $\pi$. This analysis is heavily based on: first, the study of the essential spectral radius $r_{ess}(P_{|\ell^2(\pi)})$ of $P_{|\ell^2(\pi)}$ derived from Hennion's quasi-compactness criteria; second, the connection between the spectral gap property (SG) of $P$ on $\ell^2(\pi)$ and the $V$-geometric ergodicity of $P$. Specifically, (SG) is shown to hold under the condition $\alpha_0 := \sum_{m=-N}^{N} \limsup_{i\rightarrow+\infty} \sqrt{P(i,i+m)\, P^*(i+m,i)} < 1$. Moreover $r_{ess}(P_{|\ell^2(\pi)}) \leq \alpha_0$. Effective bounds on the convergence rate can be provided from a truncation procedure.
Comment: in Journal of Applied Probability, Applied Probability Trust, 2016. arXiv admin note: substantial text overlap with arXiv:1503.0220
A uniform Berry--Esseen theorem on $M$-estimators for geometrically ergodic Markov chains
Let $(X_n)_{n\in\mathbb{N}}$ be a $V$-geometrically ergodic Markov chain. Given some real-valued functional $F$, define $M_n(\alpha) := n^{-1}\sum_{k=1}^{n} F(\alpha, X_{k-1}, X_k)$, $\alpha\in A\subset\mathbb{R}$. Consider an $M$-estimator $\widehat{\alpha}_n$, that is, a measurable function of the observations satisfying $M_n(\widehat{\alpha}_n) \leq \min_{\alpha\in A} M_n(\alpha) + c_n$, with $(c_n)_{n\in\mathbb{N}}$ some sequence of real numbers going to zero. Under some standard regularity and moment assumptions, close to those of the i.i.d. case, the estimator $\widehat{\alpha}_n$ satisfies a Berry--Esseen theorem uniformly with respect to the underlying probability distribution of the Markov chain.
Comment: Published at http://dx.doi.org/10.3150/10-BEJ347 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)
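A minimal sketch of the setting (illustration only, with an assumed AR(1) model and Gaussian noise): the functional $F(\alpha,x,y) = (y - \alpha x)^2$ makes the least-squares estimator of the autoregression coefficient an $M$-estimator in the sense above, since its minimizer has a closed form:

```python
import numpy as np

# Assumed example: AR(1) chain X_k = a*X_{k-1} + eps_k with |a| < 1, which is
# V-geometrically ergodic.  With F(alpha, x, y) = (y - alpha*x)^2, the
# minimizer of M_n(alpha) = (1/n) sum_k F(alpha, X_{k-1}, X_k) is the
# least-squares M-estimator.
rng = np.random.default_rng(0)
a, n = 0.5, 20000
X = np.zeros(n + 1)
for k in range(n):
    X[k + 1] = a * X[k] + rng.normal()

# Closed-form minimizer of the empirical criterion M_n.
alpha_hat = np.sum(X[1:] * X[:-1]) / np.sum(X[:-1] ** 2)
assert abs(alpha_hat - a) < 0.05  # consistent at rate n^{-1/2}
```

The Berry--Esseen result in the abstract quantifies how close the distribution of $\sqrt{n}(\widehat{\alpha}_n - \alpha_0)$ is to its Gaussian limit, uniformly in the chain's law.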
Regular perturbation of V-geometrically ergodic Markov chains
In this paper, new conditions for the stability of V-geometrically ergodic Markov chains are introduced. The results are based on an extension of the standard perturbation theory formulated by Keller and Liverani. The continuity and higher regularity properties are investigated. As an illustration, an asymptotic expansion of the invariant probability measure of an autoregressive model with i.i.d. noise (with a non-standard probability density function) is obtained.
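As a hedged finite-state illustration of regular perturbation (the classical fundamental-matrix formula, not the Keller-Liverani operator framework used in the paper): for $P_t = P + tD$ with $D$ having zero row sums, the invariant measure satisfies $\pi_t = \pi + t\,\pi D Z + O(t^2)$, where $Z = (I - P + \mathbf{1}\pi)^{-1}$. All matrices below are made-up examples:

```python
import numpy as np

# First-order perturbation of the stationary distribution of a finite chain:
# pi_t = pi + t * pi @ D @ Z + O(t^2),  Z = (I - P + 1*pi)^{-1}.
def stationary(P):
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])  # left eigenvector for 1
    return pi / pi.sum()

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
D = np.array([[ 0.1, -0.1,  0.0],      # perturbation direction,
              [ 0.0,  0.1, -0.1],      # each row sums to zero so that
              [-0.1,  0.0,  0.1]])     # P + t*D stays stochastic

pi = stationary(P)
Z = np.linalg.inv(np.eye(3) - P + np.outer(np.ones(3), pi))  # fundamental matrix

t = 1e-3
pi_t = stationary(P + t * D)
first_order = pi + t * pi @ D @ Z
assert np.allclose(pi_t, first_order, atol=1e-5)  # agreement up to O(t^2)
```

The check works because $Z\mathbf{1} = \mathbf{1}$ and $\pi D \mathbf{1} = 0$, so $\pi D Z$ solves $x(I-P) = \pi D$ with $x\mathbf{1} = 0$.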
On the asymptotic analysis of Littlewood's reliability model for modular software
We consider a Markovian model, proposed by Littlewood, to assess the reliability of modular software. Specifically, we are interested in the asymptotic properties of the corresponding failure point process. We focus on its time-stationary version and on its behavior when reliability growth takes place. We prove the convergence in distribution of the failure point process to a Poisson process. Additionally, we provide a convergence rate using the distance in variation. This is heavily based on a similar result of Kabanov, Liptser and Shiryayev for a doubly stochastic Poisson process whose intensity is governed by a Markov process.
Linear Dynamics for the state vector of Markov chain functions
Let $(\varphi(X_n))_n$ be a function of a finite-state Markov chain $(X_n)_n$. In this note, we investigate under which conditions the random variables $\varphi(X_n)$ have the same distribution as $\varphi(Y_n)$ (for every $n$), where $(Y_n)_n$ is a Markov chain with fixed transition probability matrix. In other words, for a deterministic function $\varphi$, we investigate the conditions under which $(X_n)_n$ is \textit{weakly lumpable for the state vector}. We show that the set of all probability distributions of $X_0$ such that $(X_n)_n$ is weakly lumpable for the state vector can be finitely generated. The connections between our definition of lumpability and the usual ones, such as the proportional dynamics property, are discussed.
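A related, simpler check that can be sketched in code is the classical strong lumpability criterion of Kemeny and Snell, a stricter condition than the weak lumpability studied here: the aggregated chain is Markov for every initial distribution exactly when the probability of jumping into each block is constant across the states of any given block. The matrix and partition below are made-up examples:

```python
import numpy as np

# Strong lumpability test (Kemeny-Snell): for every pair of blocks (B, C),
# the total mass sent from state i into block C must be the same for all
# states i in block B.
def strongly_lumpable(P, blocks):
    for B in blocks:
        for C in blocks:
            sums = [P[i, C].sum() for i in B]
            if not np.allclose(sums, sums[0]):
                return False
    return True

P = np.array([[0.5, 0.25, 0.25],
              [0.2, 0.4,  0.4 ],
              [0.2, 0.3,  0.5 ]])
blocks = [[0], [1, 2]]
# States 1 and 2 both send mass 0.2 into block {0} and 0.8 into block {1,2},
# so the chain is strongly lumpable with respect to this partition.
assert strongly_lumpable(P, blocks)
```

Weak lumpability, by contrast, only requires the Markov property for some initial distributions, which is why the abstract characterizes a set of admissible initial laws rather than a yes/no property of $P$.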
Towards a filter-based EM-algorithm for parameter estimation of Littlewood's software reliability model
In this paper, we deal with a continuous-time software reliability model designed by Littlewood. This model may be thought of as a partially observed Markov process. The EM-algorithm is a standard way to estimate the parameters of processes with missing data. The E-step requires the computation of basic statistics related to the observed/hidden processes. We provide finite-dimensional non-linear filters for these statistics using the innovations method. This allows us to plan the use of the filter-based EM-algorithm developed by Elliott.
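As a hedged discrete-time analogue (a standard HMM forward-backward pass, not Elliott's continuous-time filters), the E-step statistics can be sketched as posterior state probabilities given the observations; all model matrices below are assumed values:

```python
import numpy as np

# E-step for a discrete HMM: the forward-backward recursions compute
# gamma[t, s] = P(hidden state at time t is s | all observations), the basic
# statistic the M-step then uses to re-estimate the parameters.
def e_step(A, B, pi0, obs):
    n, S = len(obs), len(pi0)
    alpha = np.zeros((n, S))
    beta = np.zeros((n, S))
    alpha[0] = pi0 * B[:, obs[0]]                      # forward pass
    for t in range(1, n):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0                                     # backward pass
    for t in range(n - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

A = np.array([[0.9, 0.1], [0.2, 0.8]])   # hidden-state transitions (assumed)
B = np.array([[0.8, 0.2], [0.3, 0.7]])   # emission probabilities (assumed)
gamma = e_step(A, B, np.array([0.5, 0.5]), [0, 1, 1, 0])
assert np.allclose(gamma.sum(axis=1), 1.0)  # valid posteriors at each time
```

In the continuous-time, partially observed setting of the abstract, the filters derived by the innovations method play the role of these recursions.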
Strong convergence of a class of non-homogeneous Markov arrival processes to a Poisson process
In this paper, we are concerned with a time-inhomogeneous version of the Markovian arrival process. Under the assumption that the environment process is asymptotically time-homogeneous, we discuss a Poisson approximation of the counting process of arrivals when the arrivals are rare. We provide a rate of convergence for the distance in variation. A Poisson-type approximation for the process resulting from a special marking procedure of the arrivals is outlined.
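A minimal discrete-time sketch of the rare-arrival regime (assumed two-state environment and assumed small arrival probabilities): the exact law of the arrival count is computed by dynamic programming over (environment state, count) and then compared in total variation to a Poisson law with the same mean:

```python
import numpy as np
from math import exp, factorial

A = np.array([[0.9, 0.1],    # environment transition matrix (assumed)
              [0.3, 0.7]])
p = np.array([0.01, 0.02])   # arrival probability per environment state
n = 100                      # number of time steps

# joint[s, k] = P(environment = s, k arrivals so far)
joint = np.zeros((2, n + 1))
joint[:, 0] = [0.5, 0.5]
for _ in range(n):
    new = np.zeros_like(joint)
    for s in range(2):
        for t in range(2):
            new[t] += joint[s] * A[s, t] * (1 - p[s])        # no arrival
            new[t, 1:] += joint[s, :-1] * A[s, t] * p[s]     # one arrival
    joint = new

count = joint.sum(axis=0)                      # exact law of the count
lam = sum(k * count[k] for k in range(n + 1))  # its mean
poisson = np.array([exp(-lam) * lam**k / factorial(k) for k in range(n + 1)])
tv = 0.5 * np.abs(count - poisson).sum()       # distance in variation
assert tv < 0.05  # rare arrivals: the count is close to Poisson
```

The fast-mixing environment and small per-step probabilities are what keep the variation distance small, mirroring the rare-arrival asymptotics of the abstract.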
A new description of the aggregation possibilities of a Markov chain
We consider a (homogeneous) Markov chain with finite state space E. One is often led to introduce a so-called aggregated chain, whose state space consists of aggregates of elements of E. Unfortunately, this chain in general no longer has the (homogeneous) Markov property. Recent studies have addressed the computation of the set of initial distributions for which the aggregated chain is Markovian. In this work, we show that if the transition matrix P of the initial model is irreducible, then every initial distribution preserving the Markov property differs from the invariant distribution of P by a vector of a so-called trajectorial observability space. In particular, this sharpens a recent result of Peng.